Results 1 - 2 of 2
1.
arxiv; 2023.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2303.13567v2

ABSTRACT

While it is well known that population differences from genetics, sex, race, and environmental factors contribute to disease, AI studies in medicine have largely focused on locoregional patient cohorts with less diverse data sources. This limitation stems from barriers to large-scale data sharing and ethical concerns over data privacy. Federated learning (FL) is one potential pathway for AI development that enables learning across hospitals without data sharing. In this study, we show the results of various FL strategies on one of the largest and most diverse COVID-19 chest CT datasets: 21 participating hospitals across five continents comprising >10,000 patients with >1 million images. We also propose an FL strategy that leverages synthetically generated data to overcome class and size imbalances. Finally, we describe the sources of data heterogeneity in the context of FL and show how, even among correctly labeled populations, disparities can arise from these biases.


Subject(s)
COVID-19
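As context for the FL strategies this abstract compares, the following is a minimal, illustrative sketch of the server-side weighted-averaging step that standard federated averaging (FedAvg) performs after each round of local training. It is not taken from the indexed paper; the names (aggregate, site_weights, site_sizes) are placeholders.

import numpy as np

def aggregate(site_weights, site_sizes):
    """FedAvg-style aggregation: average each model parameter across
    participating hospitals, weighted by local dataset size.

    site_weights: one list of np.ndarray parameters per site
    site_sizes:   number of training samples contributed by each site
    """
    total = float(sum(site_sizes))
    num_params = len(site_weights[0])
    averaged = []
    for p in range(num_params):
        # Weighted sum of the p-th parameter tensor over all sites.
        averaged.append(sum(w[p] * (n / total) for w, n in zip(site_weights, site_sizes)))
    return averaged

# Toy usage: three "hospitals" with very different cohort sizes.
rng = np.random.default_rng(0)
sites = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_params = aggregate(sites, site_sizes=[100, 2500, 40])
print([p.shape for p in global_params])  # [(4, 4), (4,)]

Because the weighting is proportional to site size, small or class-imbalanced cohorts contribute little to the global model, which is the kind of imbalance the abstract's synthetic-data strategy is meant to counteract.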
2.
Mona Flores; Ittai Dayan; Holger Roth; Aoxiao Zhong; Ahmed Harouni; Amilcare Gentili; Anas Abidin; Andrew Liu; Anthony Costa; Bradford Wood; Chien-Sung Tsai; Chih-Hung Wang; Chun-Nan Hsu; CK Lee; Colleen Ruan; Daguang Xu; Dufan Wu; Eddie Huang; Felipe Kitamura; Griffin Lacey; Gustavo César de Antônio Corradi; Hao-Hsin Shin; Hirofumi Obinata; Hui Ren; Jason Crane; Jesse Tetreault; Jiahui Guan; John Garrett; Jung Gil Park; Keith Dreyer; Krishna Juluru; Kristopher Kersten; Marcio Aloisio Bezerra Cavalcanti Rockenbach; Marius Linguraru; Masoom Haider; Meena AbdelMaseeh; Nicola Rieke; Pablo Damasceno; Pedro Mario Cruz e Silva; Pochuan Wang; Sheng Xu; Shuichi Kawano; Sira Sriswasdi; Soo Young Park; Thomas Grist; Varun Buch; Watsamon Jantarabenjakul; Weichung Wang; Won Young Tak; Xiang Li; Xihong Lin; Fred Kwon; Fiona Gilbert; Josh Kaggie; Quanzheng Li; Abood Quraini; Andrew Feng; Andrew Priest; Baris Turkbey; Benjamin Glicksberg; Bernardo Bizzo; Byung Seok Kim; Carlos Tor-Diez; Chia-Cheng Lee; Chia-Jung Hsu; Chin Lin; Chiu-Ling Lai; Christopher Hess; Colin Compas; Deepi Bhatia; Eric Oermann; Evan Leibovitz; Hisashi Sasaki; Hitoshi Mori; Isaac Yang; Jae Ho Sohn; Krishna Nand Keshava Murthy; Li-Chen Fu; Matheus Ribeiro Furtado de Mendonça; Mike Fralick; Min Kyu Kang; Mohammad Adil; Natalie Gangai; Peerapon Vateekul; Pierre Elnajjar; Sarah Hickman; Sharmila Majumdar; Shelley McLeod; Sheridan Reed; Stefan Graf; Stephanie Harmon; Tatsuya Kodama; Thanyawee Puthanakit; Tony Mazzulli; Vitor de Lima Lavor; Yothin Rakvongthai; Yu Rim Lee; Yuhong Wen.
researchsquare; 2020.
Preprint in English | PREPRINT-RESEARCHSQUARE | ID: ppzbmed-10.21203.rs.3.rs-126892.v1

ABSTRACT

‘Federated Learning’ (FL) is a method to train Artificial Intelligence (AI) models with data from multiple sources while maintaining anonymity of the data, thus removing many barriers to data sharing. During the SARS-COV-2 pandemic, 20 institutes collaborated on a healthcare FL study to predict future oxygen requirements of infected patients using inputs of vital signs, laboratory data, and chest x-rays, constituting the “EXAM” (EMR CXR AI Model) model. EXAM achieved an average Area Under the Curve (AUC) of over 0.92, an average improvement of 16%, and a 38% increase in generalisability over local models. The FL paradigm was successfully applied to facilitate a rapid data science collaboration without data exchange, resulting in a model that generalised across heterogeneous, unharmonized datasets. This provided the broader healthcare community with a validated model to respond to COVID-19 challenges, as well as setting the stage for broader use of FL in healthcare.


Subject(s)
COVID-19 , Infections
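To illustrate the kind of cross-site evaluation behind the generalisability figures quoted in this abstract, here is a hedged sketch of scoring a trained binary classifier on held-out external sites with per-site AUC. The helper external_auc and the toy logistic-regression stand-in are illustrative assumptions, not the EXAM pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def external_auc(model, external_sites):
    """Evaluate a trained model on each held-out external site and report
    per-site and average AUC, a simple proxy for generalisability.

    external_sites: dict mapping site name -> (features, binary labels)
    """
    aucs = {}
    for name, (X, y) in external_sites.items():
        scores = model.predict_proba(X)[:, 1]  # probability of the positive class
        aucs[name] = roc_auc_score(y, scores)
    aucs["average"] = float(np.mean(list(aucs.values())))
    return aucs

# Toy usage: train on synthetic "local" data, then score three synthetic external sites.
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
sites = {f"site_{i}": (rng.normal(size=(50, 8)), rng.integers(0, 2, 50)) for i in range(3)}
print(external_auc(model, sites))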